AI-Generation Tools: What Students Should Know

By Patricia Roy
July 17, 2023

You have probably heard about ChatGPT and other AI tools by now. Your teachers may have already shared their policies regarding these tools with you, and it's possible you or someone you know has tried using them already. It would be an understatement to say AI-generation tools are controversial. Many of your professors have deep misgivings about them, even those who are tech-savvy and open to web-first writing and design. Why should you, a college student, be concerned about AI? Consider these three significant issues.

Beyond Plagiarism: Creativity and Copyright

Any writer, artist, or coding expert will tell you that producing quality finished work takes practice. And practice takes time and effort. There is no substitute. The same is true for anything else you are trying to learn.

Many of you are creators yourselves. You might write, draw, paint, sing, play instruments, or create all kinds of original content that then gets shared on the web. Most of the data AI gathers to generate output comes from human creativity and not other AI. That's because the Internet itself is a collective human creation. Furthermore, a good portion of the creative content online is copyrighted.

No matter what prompts you type into ChatGPT, the result will likely contain "stolen" material (I put the word in quotation marks because the legality of AI use is not settled). Heck, AI may have even sampled some of your work already. You may not think it is a big deal now, but as you grow your career and build a portfolio of your own work, you will start to care very quickly.

The fact that most material generated by ChatGPT or Midjourney (an AI tool for image generation) could reasonably be considered copyright infringement should give us pause.

Privacy? What Privacy?

We should all know that companies glean our private data with almost every mouse click. Despite this being common knowledge, polls indicate younger generations are less concerned about privacy invasion than older ones. If you're not doing anything illegal, you shouldn't be worried, right? Wrong.

The real threat here comes from bad actors using your data to commit crimes, usually identity theft or fraud, though other cybercrimes are also possible. While we have gotten used to targeted ads and curated content, we might be better off as a society and as individuals if we paid more attention to the cookies and privacy notices on our screens. AI routinely skims data without a user's knowledge or consent, sometimes accidentally. This data can be used to identify and track people, access their search and purchase history, and possibly much more. It can then be stored or repurposed far beyond its original context and intention. To top it off, state and international laws remain outdated in response. Using a program like ChatGPT is not anonymous, either: users must sign in with an email address, which adds to the pile of data a company can collect.

I find AI-driven psychological profiling particularly disturbing, as its use by social media platforms has been linked to the coarsening of civic dialogue and the erosion of trust in our media, government, and other institutions. An erosion of trust is not the same as healthy criticism, and a free society depends on citizens who understand the difference.

As young people, you spend more of your lives online, which means more of your data is available for storage and manipulation.

Racial Bias and Equity at Large

On a good day, the Internet is a treasure trove of information and discovery. On a bad day, it's a cesspool of the worst of humanity. Remember that AI bots cannot think critically and have no emotional intelligence. When we use AI to get information, cultural biases and stereotypes can show up in our results, and AI's inability to screen them out can make its output as cringeworthy as a drunk uncle at the dinner table.

As its title suggests, this Wired article reminds us that "Fake Pictures of People of Color Won't Fix AI Bias." In some ways, the threat here is the opposite of the one privacy raises: a world of hurt results when racial and other biases lead AI to misidentify people. For example, a 2019 paper revealed that facial recognition software is trained on datasets that are 80% light-skinned. While the software's creators may have intended to reduce bias, they clearly have a long way to go before accomplishing that goal.

Final Thoughts

The impact of AI on scholarship and creative work remains to be seen but will likely be deeply transformative in the years to come. There is no way to put AI back in the box. Its power could certainly be steered toward improving life on Earth if its creators are incentivized to do so. We should use the technology to learn what it can do, but we should also remember that the companies making and using AI are motivated by profit, and legislation lags behind in keeping the negative impacts in check.

As young scholars and future leaders, how you use AI technology today will profoundly influence how our lives look tomorrow. As an educator and creator, I advise you not to dismiss your teachers' concerns as just more anti-plagiarism talk. Instead, talk to them about how their lives are already impacted and the larger societal issues at stake.

Patricia Roy

Patricia Roy is a writer and professor who has helped students succeed for over 25 years. She started her career as a high school English teacher and then moved into higher education at Tuition Rewards member school Lasell University in Newton, Massachusetts. Her practical guidance and enthusiasm motivate and inspire students to fearlessly explore their own passions. Professor Roy is also a freelance writer and published poet.